Supplementary Material - WikiDO: A New Benchmark Evaluating Cross-Modal Retrieval for Vision-Language Models

Neural Information Processing Systems

This has been addressed in prior work [4, 3] by finetuning VLMs on a given corpus for a given task [5] and conducting zero-shot evaluations on a new corpus [7]. However, the mere use of an unseen corpus for evaluation does not imply it is OOD.

Q1 What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Please provide a description. (a) We provide 384k image-text pairs.

Q3 Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set?


DATASHEET: MOTIVE

Neural Information Processing Systems

Please see the most updated version here.

Was there a specific task in mind? Was there a specific gap that needed to be filled? The MOTIVE dataset was created to promote the development of new drug-target interaction (DTI) prediction models based on both existing relationships between compounds and their protein targets and the similarity of JUMP Cell Painting morphological features of perturbed cells [2]. The MOTIVE dataset was created with the DTI task in mind, and it addresses a lack of graph-based biological datasets with empirical node features.

Who created this dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? This dataset was created by the Carpenter-Singh Lab in the Imaging Platform at the Broad Institute of MIT and Harvard, Cambridge, Massachusetts.

What support was needed to make this dataset? (If there is an associated grant, provide the name of the grantor and the grant name and number, or if it was supported by a company or government agency, give those details.) The authors gratefully acknowledge an internship from the Massachusetts Life Sciences Center (to ES).





Appendix

Neural Information Processing Systems

Your goal is to label whether an image matches a search query. Images matching a query are called "relevant". You should make sure to label all the relevant images.